A Common Messaging Layer for MPI and PVM over SCI
Authors
Abstract
This paper describes the design of a common message passing layer for implementing both MPI and PVM over the SCI interconnect in a workstation or PC cluster. The design is focused on obtaining low latency. The message layer encapsulates all necessary knowledge of the underlying interconnect and operating system. Yet, we claim that it can be used to implement such different message passing libraries as MPI and PVM without sacrificing efficiency. Initial results obtained from using the message layer in SCI clusters are presented.

1 Background and Motivation

For several years now, workstation or PC clusters have been in use as highly cost-effective platforms for parallel computing. However, to become real alternatives to expensive dedicated parallel machines, clusters must be equipped with high-speed interconnects and a sufficiently complete software basis supporting parallel programming and enabling application porting with relative ease. The Scalable Coherent Interface (SCI) is a promising interconnect technology for building such clusters, facilitating high-throughput and, most importantly, low-latency communication in a cluster environment [7]. While hardware (SCI adapter cards and switches) and basic device driver software for SCI-based clusters have been on the market for about three years, standard parallel programming APIs over SCI have not yet become available. The ESPRIT HPCN project SISCI (Standard Software Infrastructure for SCI-based Parallel Systems) aims at establishing this missing link by implementing and evaluating the following formal and de facto standard software environments on SCI clusters: i) the Message Passing Interface communication library (MPI); ii) the Parallel Virtual Machine parallel programming system (PVM); and iii) a POSIX-compliant, distributed thread package (Pthreads). These packages will be evaluated by a number of demanding parallel applications. More information on SISCI is available in [3] and from [18].
Similar papers
Fast Communication Libraries on an SCI Cluster
In this paper, we describe three fast communication libraries that we have developed for a cluster with an SCI interconnect: (1) an implementation of Active Messages; (2) a library which exports the Berkeley Sockets API but, in contrast to the conventional TCP/IP protocol suite, employs lightweight, user-level communication mechanisms internally; and (3) a messaging layer designed as a comm...
SISCI | Implementing a Standard Software Infrastructure on an SCI Cluster
To enable the efficient utilization of clusters of workstations it is crucial to develop a stable and rich software infrastructure. The ESPRIT Project SISCI will provide two widely used message-passing interfaces, MPI and PVM, as well as a POSIX-compliant, distributed thread package (Pthreads) on multiple SCI-based clusters. This paper features motivation and background on this project as well ...
Intermediate report on the adaptation of NT-PVM to SCI mechanisms
This report describes the implementation of the Common Message Passing Layer (CML) for the SISCI PC cluster, as developed further from the previous report D.2.1.1, and presents the results achieved with this implementation. As CML functionality is integrated into an existing implementation of PVM, we will achieve immediate full functionality of the PVM system over SCI as soon as the amal...
TEG: A High-Performance, Scalable, Multi-network Point-to-Point Communications Methodology
TEG is a new component-based methodology for point-to-point messaging. Developed as part of the Open MPI project, TEG provides a configurable fault-tolerant capability for high-performance messaging that utilizes multi-network interfaces where available. Initial performance comparisons with other MPI implementations show comparable ping-pong latencies, but with bandwidths up
Comparative analysis of PVM and MPI for the development of physical applications on parallel clusters∗
PVM and MPI, two systems for programming clusters, are often compared. Each system has its unique strengths and this will remain so into the foreseeable future. This paper compares PVM and MPI features, pointing out the situations where one may be favored over the other; it explains the differences between these systems and the reasons for such differences. PVM – Parallel Virtual Machine; MPI –...